Content On This Page:
- Introduction to Determinants
- Properties of Determinants
- Area of a Triangle using Determinants
- Adjoint of a Square Matrix
Determinants and Adjoint
Introduction to Determinants
In mathematics, associated with every square matrix is a unique scalar value called its determinant. The determinant of a matrix $A$ is a single number, computed from the elements of the matrix, that provides valuable information about the matrix. It is denoted by $\det(A)$, $\text{Det}(A)$, or most commonly by enclosing the matrix elements within vertical bars, like $|A|$ or $\begin{vmatrix} \dots \end{vmatrix}$.
Determinants are fundamental in linear algebra and have wide-ranging applications. They are used to determine if a matrix is invertible (a matrix is invertible if and only if its determinant is non-zero), to solve systems of linear equations using Cramer's rule, to calculate the area of a triangle or the volume of a parallelepiped defined by vectors, and in various formulas in calculus and other areas.
An important point to remember is that the determinant is defined only for square matrices. A non-square matrix does not have a determinant.
Determinant of a $1 \times 1$ Matrix
For the simplest square matrix, a $1 \times 1$ matrix $A = [a_{11}]$, the determinant is defined to be the value of the single element itself.
If $A = [a_{11}]$, then $\det(A) = |A| = a_{11} $
... (i)
Example of $1 \times 1$ Determinant
- Let $A = [5]$. Then $\det(A) = |5| = 5$.
- Let $B = [-3]$. Then $\det(B) = |-3| = -3$ (here the vertical bars denote the determinant, not the absolute value).
- Let $C = [0]$. Then $\det(C) = |0| = 0$.
Determinant of a $2 \times 2$ Matrix
For a $2 \times 2$ square matrix $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, the determinant is calculated using a simple formula involving the products of elements along the diagonals.
The determinant is the product of the elements on the main diagonal ($a_{11} \times a_{22}$) minus the product of the elements on the anti-diagonal ($a_{21} \times a_{12}$).
If $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, then $\det(A) = |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{21}a_{12} $
... (ii)
Example of $2 \times 2$ Determinant
Example 1. Calculate the determinant of the matrix $A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$.
Answer:
The given matrix is $A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$. This is a $2 \times 2$ matrix with $a_{11}=2, a_{12}=3, a_{21}=4, a_{22}=5$.
Using the formula for a $2 \times 2$ determinant:
$$ \det(A) = (a_{11} \times a_{22}) - (a_{21} \times a_{12}) $$
$$ \det(A) = (2 \times 5) - (4 \times 3) $$
$$ \det(A) = 10 - 12 $$
$$ \det(A) = -2 $$
The determinant of matrix $A$ is $-2$.
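As a quick illustration of formula (ii), here is a minimal Python sketch (assuming NumPy is installed for a cross-check; the function name `det_2x2` is just illustrative):

```python
import numpy as np

def det_2x2(m):
    """Determinant of a 2x2 matrix via formula (ii): a11*a22 - a21*a12."""
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

A = [[2, 3],
     [4, 5]]
print(det_2x2(A))                      # -2
print(np.linalg.det(np.array(A)))      # approximately -2.0 (floating-point cross-check)
```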
Determinant of a $3 \times 3$ Matrix
Calculating the determinant of a $3 \times 3$ matrix or larger involves a process called expansion by minors and cofactors. The determinant can be expanded along any row or any column of the matrix.
For a $3 \times 3$ matrix $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$, let's first define the necessary terms.
Minors and Cofactors
The minor of an element $a_{ij}$ (the element in the $i$-th row and $j$-th column) of a matrix $A$, denoted by $M_{ij}$, is the determinant of the square submatrix obtained by deleting the $i$-th row and the $j$-th column of the original matrix $A$. For a $3 \times 3$ matrix, deleting one row and one column results in a $2 \times 2$ submatrix, and its determinant is the minor.
The cofactor of an element $a_{ij}$, denoted by $C_{ij}$, is the minor $M_{ij}$ multiplied by $(-1)^{i+j}$. The formula relating the cofactor and the minor is:
$C_{ij} = (-1)^{i+j} M_{ij} $
... (iii)
The factor $(-1)^{i+j}$ introduces a sign based on the position of the element. The sign pattern for the cofactors in a $3 \times 3$ matrix is:
$$ \begin{bmatrix} (-1)^{1+1} & (-1)^{1+2} & (-1)^{1+3} \\ (-1)^{2+1} & (-1)^{2+2} & (-1)^{2+3} \\ (-1)^{3+1} & (-1)^{3+2} & (-1)^{3+3} \end{bmatrix} = \begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix} $$
Expansion of Determinant along a Row or Column
The determinant of a square matrix can be computed by selecting any single row or any single column and summing the products of each element in that row or column with its corresponding cofactor. This is called cofactor expansion.
For a $3 \times 3$ matrix $A = [a_{ij}]$, the determinant can be found by expanding along the first row:
$$ \det(A) = |A| = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} $$
Substituting the definition of the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$:
$$ \det(A) = a_{11}(-1)^{1+1}M_{11} + a_{12}(-1)^{1+2}M_{12} + a_{13}(-1)^{1+3}M_{13} $$
$$ \det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13} \quad \text{... (iv)} $$
Here, $M_{11}$ is the determinant of the $2 \times 2$ submatrix obtained by removing the 1st row and 1st column of $A$, $M_{12}$ is the determinant of the submatrix obtained by removing the 1st row and 2nd column of $A$, and $M_{13}$ is the determinant of the submatrix obtained by removing the 1st row and 3rd column of $A$.
$$ M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{32}a_{23} $$
$$ M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{21}a_{33} - a_{31}a_{23} $$
$$ M_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} = a_{21}a_{32} - a_{31}a_{22} $$
Substituting these expressions for the minors back into formula (iv) gives the full expansion for the determinant of a $3 \times 3$ matrix:
$$ \det(A) = a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22}) \quad \text{... (v)} $$
While this formula looks complex, the process of calculating minors and cofactors and expanding along a row or column is systematic.
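The expansion in (iv) generalizes to any order: multiply each element of a chosen row by its cofactor and recurse on the minors. Here is a minimal recursive sketch in plain Python, expanding along the first row (the function name `det_cofactor` is illustrative, and the matrix below is the one used in Example 2):

```python
def det_cofactor(m):
    """Determinant by cofactor expansion along the first row (formula (iv))."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete row 0 and column j (0-based indices).
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        # Sign (-1)^(1 + (j+1)) in 1-based indexing equals (-1)^j here.
        total += m[0][j] * (-1) ** j * det_cofactor(minor)
    return total

B = [[1, 2, -1],
     [3, 0, 4],
     [5, -2, 6]]
print(det_cofactor(B))   # 18, matching Example 2 below
```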
Example of $3 \times 3$ Determinant Calculation
Example 2. Calculate the determinant of the matrix $B = \begin{bmatrix} 1 & 2 & -1 \\ 3 & 0 & 4 \\ 5 & -2 & 6 \end{bmatrix}$ by expanding along the first row.
Answer:
The given matrix is $B = \begin{bmatrix} 1 & 2 & -1 \\ 3 & 0 & 4 \\ 5 & -2 & 6 \end{bmatrix}$. We will expand along the first row, whose elements are $b_{11}=1, b_{12}=2, b_{13}=-1$.
First, find the minors of the elements in the first row:
$M_{11}$ is the determinant of the submatrix after deleting the 1st row and 1st column:
$$ M_{11} = \begin{vmatrix} 0 & 4 \\ -2 & 6 \end{vmatrix} = (0 \times 6) - (-2 \times 4) = 0 - (-8) = 8 $$
$M_{12}$ is the determinant of the submatrix after deleting the 1st row and 2nd column:
$$ M_{12} = \begin{vmatrix} 3 & 4 \\ 5 & 6 \end{vmatrix} = (3 \times 6) - (5 \times 4) = 18 - 20 = -2 $$
$M_{13}$ is the determinant of the submatrix after deleting the 1st row and 3rd column:
$$ M_{13} = \begin{vmatrix} 3 & 0 \\ 5 & -2 \end{vmatrix} = (3 \times -2) - (5 \times 0) = -6 - 0 = -6 $$
Next, calculate the cofactors of the elements in the first row using $C_{ij} = (-1)^{i+j} M_{ij}$:
$$ C_{11} = (-1)^{1+1} M_{11} = (+1)(8) = 8 $$
$$ C_{12} = (-1)^{1+2} M_{12} = (-1)(-2) = 2 $$
$$ C_{13} = (-1)^{1+3} M_{13} = (+1)(-6) = -6 $$
Now, expand the determinant along the first row using $\det(B) = b_{11}C_{11} + b_{12}C_{12} + b_{13}C_{13}$:
$$ \det(B) = (1)(8) + (2)(2) + (-1)(-6) $$
$$ \det(B) = 8 + 4 + 6 $$
$$ \det(B) = 18 $$
So, the determinant of matrix $B$ is 18.
Alternative Expansion (Along Second Column)
To demonstrate that the choice of row or column does not affect the result, let's calculate the determinant of matrix $B$ by expanding along the second column. The elements of the second column are $b_{12}=2, b_{22}=0, b_{32}=-2$. The sign pattern for the second column is $\begin{bmatrix} - \\ + \\ - \end{bmatrix}$.
The formula for expansion along the second column is $\det(B) = b_{12}C_{12} + b_{22}C_{22} + b_{32}C_{32}$.
We already calculated $C_{12} = 2$.
Calculate $M_{22}$ (minor of $b_{22}$ - remove row 2, column 2):
$$ M_{22} = \begin{vmatrix} 1 & -1 \\ 5 & 6 \end{vmatrix} = (1 \times 6) - (5 \times -1) = 6 - (-5) = 11 $$
Cofactor $C_{22} = (-1)^{2+2} M_{22} = (+1)(11) = 11$.
Calculate $M_{32}$ (minor of $b_{32}$ - remove row 3, column 2):
$$ M_{32} = \begin{vmatrix} 1 & -1 \\ 3 & 4 \end{vmatrix} = (1 \times 4) - (3 \times -1) = 4 - (-3) = 7 $$
Cofactor $C_{32} = (-1)^{3+2} M_{32} = (-1)(7) = -7$.
Now, expand along the second column:
$$ \det(B) = (b_{12} \times C_{12}) + (b_{22} \times C_{22}) + (b_{32} \times C_{32}) $$
$$ \det(B) = (2 \times 2) + (0 \times 11) + (-2 \times -7) $$
$$ \det(B) = 4 + 0 + 14 $$
$$ \det(B) = 18 $$
The result is 18, which is the same as the determinant calculated by expanding along the first row. Choosing a row or column with zeros can simplify the calculation, as the term $a_{ij}C_{ij}$ will be zero if $a_{ij}$ is zero.
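To see numerically that the choice of row or column does not change the result, one can expand along each row (and, via the transpose, each column) and compare. A short sketch assuming NumPy, with the illustrative helper name `expand_along_row`:

```python
import numpy as np

def expand_along_row(m, i):
    """Cofactor expansion of det(m) along row i (0-based); m is a square NumPy array."""
    n = m.shape[0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(m, i, axis=0), j, axis=1)
        total += m[i, j] * (-1) ** (i + j) * np.linalg.det(minor)
    return total

B = np.array([[1, 2, -1],
              [3, 0, 4],
              [5, -2, 6]], dtype=float)
print([expand_along_row(B, i) for i in range(3)])     # every row expansion gives 18 (up to rounding)
print([expand_along_row(B.T, j) for j in range(3)])   # columns of B are rows of B.T: again 18 each
```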
Determinants are a fundamental concept in matrix theory, providing a single number that encapsulates key properties of a square matrix.
Properties of Determinants
Determinants are not just abstract numbers calculated from matrices; they possess a rich set of properties that make them invaluable tools in linear algebra. These properties provide insights into how determinants behave under various matrix operations and manipulations. Understanding these properties can significantly simplify the process of calculating determinants and are essential for proving numerous theorems.
Key Properties of Determinants
Let $A$ and $B$ be square matrices of the same order $n$, and let $k$ be a scalar. The following are some of the key properties of determinants:
Property 1: Determinant of the Transpose
The determinant of a square matrix remains the same when its rows are changed into columns and its columns are changed into rows. In other words, the determinant of a matrix is equal to the determinant of its transpose.
$\det(A^T) = \det(A) $
... (i)
Significance: This property implies that any theorem or property applicable to the rows of a determinant is also applicable to its columns, and vice versa. It allows us to work with column operations with the same understanding of their effect on the determinant as we have for row operations.
Property 2: Interchange of Rows or Columns
If any two rows or any two columns of a square matrix are interchanged, the sign of its determinant is reversed (i.e., the determinant is multiplied by $-1$).
Example: For a $2 \times 2$ matrix, $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$. If we interchange the rows, $\begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - ad = -(ad - bc)$.
Notation: If a matrix $A'$ is obtained from a matrix $A$ by interchanging any two rows (or columns), then $\det(A') = -\det(A)$.
Consequence: If a square matrix has two identical rows or two identical columns, its determinant is 0. This is because if we interchange the two identical rows/columns, the matrix remains unchanged ($A' = A$), but its determinant must change sign ($\det(A') = -\det(A)$). So, $\det(A) = -\det(A)$, which implies $2\det(A) = 0$, and thus $\det(A) = 0$.
Property 3: Multiplication by a Scalar
If all the elements of a single row or a single column of a square matrix are multiplied by a non-zero scalar $k$, then the determinant of the new matrix is $k$ times the determinant of the original matrix.
Example: For a $2 \times 2$ matrix, $\begin{vmatrix} ka & kb \\ c & d \end{vmatrix} = (ka)d - c(kb) = kad - kcb = k(ad - cb) = k \begin{vmatrix} a & b \\ c & d \end{vmatrix}$.
Notation: If a matrix $A'$ is obtained from a matrix $A$ by multiplying a single row (or column) by a scalar $k$, then $\det(A') = k \det(A)$.
Consequence: If all elements of an $n \times n$ matrix $A$ are multiplied by a scalar $k$ (which means multiplying each of the $n$ rows by $k$), the resulting matrix is $kA$. The determinant of $kA$ is $\det(kA) = k^n \det(A)$.
Also, if a matrix has a row or a column consisting entirely of zeros, its determinant is 0. This can be seen as multiplying that row/column by $k=0$, so $\det(A') = 0 \times \det(A) = 0$.
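A quick numerical check of the consequence $\det(kA) = k^n \det(A)$ (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
k, n = 3.0, A.shape[0]
# Multiplying the whole n x n matrix by k multiplies the determinant by k**n.
print(np.linalg.det(k * A))        # approximately -18.0
print(k ** n * np.linalg.det(A))   # approximately -18.0, since det(A) = -2 and k**2 = 9
```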
Property 4: Sum of Elements in a Row or Column
If each element of a row (or a column) of a square matrix is expressed as the sum of two or more terms, then the determinant of the matrix can be expressed as the sum of two or more determinants of matrices obtained by splitting that row (or column).
Example: For a $2 \times 2$ matrix, $\begin{vmatrix} a+x & b+y \\ c & d \end{vmatrix} = (a+x)d - c(b+y) = ad + xd - cb - cy = (ad - cb) + (xd - cy) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} x & y \\ c & d \end{vmatrix}$.
Significance: This property allows us to decompose determinants with more complex entries in a row or column into simpler determinants.
Property 5: Row or Column Operations (Elementary Operation of Type 3)
If to the elements of any row (or column), the corresponding elements of any other row (or column) multiplied by a scalar $k$ are added, then the determinant of the matrix remains unchanged.
If $A'$ is obtained from $A$ by $R_i \to R_i + k R_j$ or $C_i \to C_i + k C_j$, then $\det(A') = \det(A) $
... (ii)
Significance: This is the most powerful property for simplifying determinants in practice. By applying this operation, we can create zeros in the matrix without changing the determinant's value. This allows us to expand the determinant along a row or column with many zeros, significantly reducing the calculation complexity.
Derivation (for $2 \times 2$ case $R_1 \to R_1 + k R_2$):
Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. The determinant is $\det(A) = ad - bc$.
Apply the operation $R_1 \to R_1 + k R_2$ to get the new matrix $A' = \begin{bmatrix} a+kc & b+kd \\ c & d \end{bmatrix}$.
Calculate the determinant of $A'$:
$$ \det(A') = (a+kc)d - c(b+kd) $$
Distribute the terms:
$$ \det(A') = ad + kcd - cb - ckd $$
Rearrange and observe that the terms $kcd$ and $ckd$ are equal and have opposite signs, so they cancel out:
$$ \det(A') = ad - cb $$
Since $ad - cb = ad - bc$ (multiplication is commutative), we have:
$$ \det(A') = ad - bc = \det(A) $$
The determinant remains unchanged. The proof for larger matrices follows similar logic, using the facts that a determinant is multilinear in its rows/columns and is zero if two rows/columns are identical.
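The same invariance can be checked numerically for a concrete matrix. A sketch assuming NumPy; the operation applied below is $R_1 \to R_1 + kR_2$:

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.0, 4.0],
              [5.0, -2.0, 6.0]])
k = 7.0
A_prime = A.copy()
A_prime[0] = A_prime[0] + k * A_prime[1]   # R1 -> R1 + k*R2 (rows 0 and 1 in 0-based indexing)
print(np.isclose(np.linalg.det(A), np.linalg.det(A_prime)))   # True: determinant unchanged
```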
Property 6: Proportional Rows or Columns
If any two rows or any two columns of a square matrix are proportional (meaning one row is a scalar multiple of another row, or one column is a scalar multiple of another column), then its determinant is 0.
Explanation: This is a consequence of Properties 3 and 2. Suppose row $i$ is $k$ times row $j$ ($R_i = k R_j$, with $i \neq j$). We can factor out the scalar $k$ from row $i$ using Property 3: $\det(A) = k \det(A')$, where $A'$ has row $i$ equal to row $j$. Since $A'$ has two identical rows, its determinant is 0 by Property 2. Thus, $\det(A) = k \times 0 = 0$. The same logic applies to proportional columns.
Property 7: Determinant of a Triangular Matrix
The determinant of a triangular matrix (which can be an upper triangular matrix, a lower triangular matrix, or a diagonal matrix) is simply the product of the elements on its main diagonal.
Explanation: If you expand the determinant of a triangular matrix using cofactor expansion along a row or column that contains many zeros, you'll see that all terms in the expansion become zero except for the one involving the diagonal element. For instance, expanding an upper triangular matrix $\begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{bmatrix}$ along the first column: $\det(A) = a \cdot C_{11} + 0 \cdot C_{21} + 0 \cdot C_{31} = a \cdot (-1)^{1+1} M_{11} = a \begin{vmatrix} d & e \\ 0 & f \end{vmatrix} = a(df - e \cdot 0) = adf$. This pattern continues for any size triangular matrix.
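A short numerical check that a triangular determinant is the product of its diagonal entries (a sketch, assuming NumPy):

```python
import numpy as np

U = np.array([[2.0, 7.0, -3.0],
              [0.0, 5.0,  1.0],
              [0.0, 0.0,  4.0]])   # upper triangular
print(np.linalg.det(U))            # approximately 40.0
print(np.prod(np.diag(U)))         # 40.0 = 2 * 5 * 4
```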
Property 8: Determinant of a Product of Matrices
For any two square matrices $A$ and $B$ of the same order, the determinant of their product $AB$ is equal to the product of their individual determinants.
$\det(AB) = \det(A) \det(B) $
... (iii)
Significance: This is a very important property that relates matrix multiplication to scalar multiplication of determinants. Note that $\det(A+B) \neq \det(A) + \det(B)$ in general.
Property 9: Determinant of the Inverse
A square matrix $A$ is invertible if and only if its determinant is non-zero ($\det(A) \neq 0$). If $A$ is invertible, the determinant of its inverse matrix $A^{-1}$ is the reciprocal of the determinant of $A$.
If $A$ is invertible, $\det(A^{-1}) = \frac{1}{\det(A)} $
... (iv)
Derivation: If $A$ is invertible, its inverse $A^{-1}$ exists and satisfies the definition $A A^{-1} = I_n$. Taking the determinant of both sides of this equation:
$$ \det(A A^{-1}) = \det(I_n) $$
Using Property 8, the determinant of the product on the left side is the product of the determinants:
$$ \det(A) \det(A^{-1}) = \det(I_n) $$
The determinant of the identity matrix $I_n$ (a diagonal matrix with 1s on the diagonal) is the product of its diagonal elements, which is $1 \times 1 \times \ldots \times 1 = 1$.
$$ \det(A) \det(A^{-1}) = 1 $$
Since $A$ is invertible, its determinant $\det(A)$ must be non-zero (otherwise, if $\det(A)=0$, we would have $0 \times \det(A^{-1}) = 1$, which is $0=1$, a contradiction). Because $\det(A) \neq 0$, we can divide both sides by $\det(A)$ to find the determinant of the inverse:
$$ \det(A^{-1}) = \frac{1}{\det(A)} $$
These properties are fundamental for understanding determinants and are widely used in various matrix operations and applications.
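Properties 8 and 9 are easy to confirm numerically for a random (almost surely invertible) matrix. A sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Property 8: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # True
# Property 9: det(A^{-1}) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))       # True
```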
Summary Table of Determinant Properties:
Property | Description | Effect on $\det(A)$ |
---|---|---|
1. Transpose | $A^T$ vs $A$ | $\det(A^T) = \det(A)$ (no change) |
2. Row/Column Interchange | Swap $R_i \leftrightarrow R_j$ or $C_i \leftrightarrow C_j$ | Sign changes ($-\det(A)$) |
3. Scalar Multiplication (Row/Column) | $R_i \to kR_i$ or $C_i \to kC_i$ | Multiplied by $k$ ($k \det(A)$) |
4. Sum in a Row/Column | A row or column written as a sum of terms | Splits into a sum of determinants |
5. Row/Column Operation (Addition) | $R_i \to R_i + kR_j$ or $C_i \to C_i + kC_j$ | No change ($\det(A)$) |
6. Identical/Proportional Rows/Columns | Two rows or columns are identical or proportional | Determinant is 0 |
7. Triangular Matrix | Upper, lower, or diagonal matrix | Product of diagonal elements |
8. Determinant of Product | $\det(AB)$ | $\det(A) \det(B)$ |
9. Determinant of Inverse | $\det(A^{-1})$ for invertible $A$ | $1/\det(A)$ |
Zero Row/Column (consequence of 3) | A row or column consists entirely of zeros | Determinant is 0 |
These properties provide a powerful toolkit for manipulating and calculating determinants efficiently, especially for larger matrices where direct cofactor expansion can be computationally intensive.
Area of a Triangle using Determinants
In coordinate geometry, we have formulas to calculate the area of a triangle given the coordinates of its vertices. Determinants provide an elegant and concise way to express this formula, leveraging the geometric interpretation of the determinant as representing the scaling factor for area or volume under a linear transformation.
Formula for Area of a Triangle
Given the coordinates of the three vertices of a triangle in the Cartesian plane as $A(x_1, y_1)$, $B(x_2, y_2)$, and $C(x_3, y_3)$, the area of the triangle ABC can be calculated using the following determinant formula:
Area of triangle ABC $= \frac{1}{2} \left| \det \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix} \right| $
... (i)
Here, $\det \begin{bmatrix} \dots \end{bmatrix}$ represents the determinant of the $3 \times 3$ matrix formed using the vertex coordinates and a column of ones. The vertical bars outside the determinant ($| \det \dots |$) denote the absolute value. The calculation of the determinant can result in a positive or negative value depending on the order in which the vertices $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ are taken (corresponding to a counterclockwise or clockwise orientation when traversing the vertices). Since the area of a geometric figure must be a non-negative quantity, we take the absolute value of the determinant before multiplying by $\frac{1}{2}$.
Expansion of the Determinant in the Area Formula
Let's expand the $3 \times 3$ determinant $D = \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$ using cofactor expansion. Expanding along the third column is particularly convenient since all elements in this column are 1. The elements in the third column are $a_{13}=1, a_{23}=1, a_{33}=1$.
The determinant is given by $\det(A) = a_{13}C_{13} + a_{23}C_{23} + a_{33}C_{33}$.
Using the cofactor formula $C_{ij} = (-1)^{i+j} M_{ij}$:
$$ D = 1 \cdot (-1)^{1+3} M_{13} + 1 \cdot (-1)^{2+3} M_{23} + 1 \cdot (-1)^{3+3} M_{33} $$
$$ D = (+1) M_{13} + (-1) M_{23} + (+1) M_{33} $$
$$ D = M_{13} - M_{23} + M_{33} $$
Now, calculate the minors $M_{13}, M_{23}, M_{33}$. These are the determinants of the $2 \times 2$ submatrices obtained by removing the corresponding row and column:
$M_{13}$ (remove row 1, column 3):
$$ M_{13} = \begin{vmatrix} x_2 & y_2 \\ x_3 & y_3 \end{vmatrix} = (x_2 \times y_3) - (x_3 \times y_2) = x_2 y_3 - x_3 y_2 $$
$M_{23}$ (remove row 2, column 3):
$$ M_{23} = \begin{vmatrix} x_1 & y_1 \\ x_3 & y_3 \end{vmatrix} = (x_1 \times y_3) - (x_3 \times y_1) = x_1 y_3 - x_3 y_1 $$
$M_{33}$ (remove row 3, column 3):
$$ M_{33} = \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} = (x_1 \times y_2) - (x_2 \times y_1) = x_1 y_2 - x_2 y_1 $$
Substitute these minor values back into the expression for $D$:
$$ D = (x_2 y_3 - x_3 y_2) - (x_1 y_3 - x_3 y_1) + (x_1 y_2 - x_2 y_1) $$
$$ D = x_2 y_3 - x_3 y_2 - x_1 y_3 + x_3 y_1 + x_1 y_2 - x_2 y_1 $$
Rearranging the terms to group by $x_i$ (the form commonly seen in coordinate geometry):
$$ D = x_1 y_2 - x_1 y_3 + x_2 y_3 - x_2 y_1 + x_3 y_1 - x_3 y_2 $$
$$ D = x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) $$
This matches the expression inside the absolute value in the standard coordinate geometry formula for the area of a triangle: Area $= \frac{1}{2} |x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)|$. The determinant formula is consistent with previous methods.
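The determinant form of the area formula translates directly into code. A minimal sketch assuming NumPy, with the hypothetical helper name `triangle_area`:

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area of the triangle with vertices p1, p2, p3, via formula (i)."""
    m = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return 0.5 * abs(np.linalg.det(m))

print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0: right triangle with legs 4 and 3
```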
Condition for Collinearity of Three Points
Three points are said to be collinear if they all lie on the same straight line. If three points are collinear, they cannot form a triangle, and thus the area of the "triangle" formed by these points is zero.
Using the determinant formula for the area of a triangle, we can establish a condition for collinearity:
Three points $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ are collinear if and only if the area of the triangle formed by them is 0.
Setting the area formula equal to zero:
$$ \frac{1}{2} \left| \det \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix} \right| = 0 $$
This equation is true if and only if the determinant itself is zero:
Three points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ are collinear $\iff \det \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix} = 0 $
... (ii)
Thus, to check if three given points are collinear, we can form the $3 \times 3$ matrix with their coordinates and a column of ones and calculate its determinant. If the determinant is 0, the points are collinear; otherwise, they form a triangle.
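The collinearity test in (ii) is the same determinant compared against zero; in floating-point arithmetic a small tolerance is advisable. A sketch assuming NumPy, with the illustrative name `are_collinear`:

```python
import numpy as np

def are_collinear(p1, p2, p3, tol=1e-9):
    """True if the three points lie on one straight line, i.e. the determinant in (ii) vanishes."""
    m = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(m)) < tol

print(are_collinear((0, 0), (1, 1), (2, 2)))   # True: all three points lie on y = x
print(are_collinear((1, 0), (6, 0), (4, 3)))   # False: these vertices form a genuine triangle
```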
Example 4. Find the area of the triangle with vertices $A(1, 0)$, $B(6, 0)$, and $C(4, 3)$.
Answer:
The coordinates of the vertices are $(x_1, y_1) = (1, 0)$, $(x_2, y_2) = (6, 0)$, and $(x_3, y_3) = (4, 3)$.
Using the determinant formula for the area of a triangle:
$$ \text{Area} = \frac{1}{2} \left| \det \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix} \right| = \frac{1}{2} \left| \det \begin{bmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{bmatrix} \right| $$
Let's calculate the determinant $D = \det \begin{bmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{bmatrix}$. Expanding along the second column is the most efficient method because it contains two zero elements. The elements of the second column are $0, 0, 3$. The sign pattern for the second column is $\begin{bmatrix} - \\ + \\ - \end{bmatrix}$.
$$ D = a_{12} C_{12} + a_{22} C_{22} + a_{32} C_{32} $$
$$ D = 0 \cdot C_{12} + 0 \cdot C_{22} + 3 \cdot C_{32} $$
$$ D = 3 \cdot C_{32} $$
Calculate the cofactor $C_{32} = (-1)^{3+2} M_{32} = (-1)^5 M_{32} = -M_{32}$.
The minor $M_{32}$ is the determinant of the submatrix obtained by deleting the 3rd row and 2nd column of the original matrix:
$$ M_{32} = \begin{vmatrix} 1 & 1 \\ 6 & 1 \end{vmatrix} = (1 \times 1) - (6 \times 1) = 1 - 6 = -5 $$
So, $C_{32} = -M_{32} = -(-5) = 5$.
Now, substitute the value of $C_{32}$ back into the determinant calculation:
$$ D = 3 \times C_{32} = 3 \times 5 = 15 $$
The determinant value is 15. Now, calculate the area using the formula:
$$ \text{Area} = \frac{1}{2} |D| = \frac{1}{2} |15| = \frac{15}{2} = 7.5 $$
The area of the triangle with vertices $A(1, 0), B(6, 0),$ and $C(4, 3)$ is $7.5$ square units.
Answer: The area of the triangle is $7.5$ square units.
Example 5. Show that the points $P(2, 3)$, $Q(4, 6)$, and $R(6, 9)$ are collinear.
Answer:
The coordinates of the given points are $(x_1, y_1) = (2, 3)$, $(x_2, y_2) = (4, 6)$, and $(x_3, y_3) = (6, 9)$.
The points are collinear if and only if the area of the triangle formed by them is zero, which is equivalent to the determinant $\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$ being equal to zero.
Let's calculate the determinant using the given coordinates:
$$ D = \det \begin{bmatrix} 2 & 3 & 1 \\ 4 & 6 & 1 \\ 6 & 9 & 1 \end{bmatrix} $$
We can expand this determinant along the first row:
$$ D = 2 \cdot C_{11} + 3 \cdot C_{12} + 1 \cdot C_{13} $$
$$ D = 2 \cdot (-1)^{1+1} M_{11} + 3 \cdot (-1)^{1+2} M_{12} + 1 \cdot (-1)^{1+3} M_{13} $$
$$ D = 2 M_{11} - 3 M_{12} + 1 M_{13} $$
Calculate the minors:
$$ M_{11} = \begin{vmatrix} 6 & 1 \\ 9 & 1 \end{vmatrix} = (6 \times 1) - (9 \times 1) = 6 - 9 = -3 $$
$$ M_{12} = \begin{vmatrix} 4 & 1 \\ 6 & 1 \end{vmatrix} = (4 \times 1) - (6 \times 1) = 4 - 6 = -2 $$
$$ M_{13} = \begin{vmatrix} 4 & 6 \\ 6 & 9 \end{vmatrix} = (4 \times 9) - (6 \times 6) = 36 - 36 = 0 $$
Substitute these minor values back into the determinant calculation:
$$ D = 2(-3) - 3(-2) + 1(0) $$
$$ D = -6 - (-6) + 0 $$
$$ D = -6 + 6 + 0 $$
$$ D = 0 $$
Since the determinant is 0, the area of the triangle formed by the points $P, Q,$ and $R$ is $\frac{1}{2} |0| = 0$. A triangle with zero area implies that the vertices are collinear.
Therefore, the points $P(2, 3)$, $Q(4, 6)$, and $R(6, 9)$ are collinear.
Answer: The points $P(2, 3)$, $Q(4, 6)$, and $R(6, 9)$ are collinear as the determinant is 0.
Alternative Check for Collinearity (Using Determinant Properties)
Consider the matrix for the determinant: $\begin{bmatrix} 2 & 3 & 1 \\ 4 & 6 & 1 \\ 6 & 9 & 1 \end{bmatrix}$. Let's look at the columns: $C_1 = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix}$, $C_2 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix}$, $C_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.
Observe the relationship between $C_1$ and $C_2$. The elements of $C_2$ are $\frac{3}{2}$ times the corresponding elements of $C_1$:
$$ C_2 = \begin{bmatrix} 3 \\ 6 \\ 9 \end{bmatrix} = \begin{bmatrix} \frac{3}{2} \times 2 \\ \frac{3}{2} \times 4 \\ \frac{3}{2} \times 6 \end{bmatrix} = \frac{3}{2} \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} = \frac{3}{2} C_1 $$
Since the second column ($C_2$) is a scalar multiple of the first column ($C_1$), the two columns are proportional. According to Property 6 of determinants (if any two columns are proportional, the determinant is 0), the determinant of the matrix must be 0. This provides a quicker way to confirm collinearity if such a proportionality is easily noticeable.
The determinant formula for the area of a triangle provides a powerful connection between linear algebra and geometry, and the condition for collinearity derived from it is a useful application of determinant properties.
Adjoint of a Square Matrix
The adjoint (or adjugate) of a square matrix is a specific matrix derived from the cofactors of the elements of the original matrix. This concept is closely related to the determinant and the inverse of a matrix and is particularly useful in computing the inverse of a matrix, especially for $2 \times 2$ and $3 \times 3$ cases.
Definition of Adjoint Matrix
Let $A = [a_{ij}]$ be a square matrix of order $n \times n$. The adjoint of $A$, denoted by $\text{adj}(A)$ or $\text{adj } A$, is defined in two steps:
1. Find the matrix of cofactors: For each element $a_{ij}$ in the matrix $A$, calculate its cofactor $C_{ij}$. Recall that the cofactor $C_{ij}$ is given by $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of $a_{ij}$ (the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column of $A$). Form a new matrix, called the matrix of cofactors, where the element in the $i$-th row and $j$-th column is the cofactor $C_{ij}$ of the corresponding element $a_{ij}$ in $A$.
$\text{cof}(A) = [C_{ij}]_{n \times n} $
... (i)
2. Take the transpose of the cofactor matrix: The adjoint of matrix $A$ is the transpose of the matrix of cofactors of $A$.
$\text{adj}(A) = (\text{cof}(A))^T = [C_{ji}]_{n \times n} $
... (ii)
This means the element in the $i$-th row and $j$-th column of the adjoint matrix $\text{adj}(A)$ is the cofactor $C_{ji}$ of the element $a_{ji}$ located at the $j$-th row and $i$-th column of the original matrix $A$.
Adjoint of a $2 \times 2$ Matrix
Let's find the adjoint of a general $2 \times 2$ matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$.
The elements are $a_{11}=a, a_{12}=b, a_{21}=c, a_{22}=d$.
Calculate the minors $M_{ij}$:
- $M_{11}$ (delete row 1, column 1) $= \det[d] = d$.
- $M_{12}$ (delete row 1, column 2) $= \det[c] = c$.
- $M_{21}$ (delete row 2, column 1) $= \det[b] = b$.
- $M_{22}$ (delete row 2, column 2) $= \det[a] = a$.
Calculate the cofactors $C_{ij} = (-1)^{i+j} M_{ij}$:
- $C_{11} = (-1)^{1+1} M_{11} = (+1)(d) = d$.
- $C_{12} = (-1)^{1+2} M_{12} = (-1)(c) = -c$.
- $C_{21} = (-1)^{2+1} M_{21} = (-1)(b) = -b$.
- $C_{22} = (-1)^{2+2} M_{22} = (+1)(a) = a$.
The matrix of cofactors is:
$$ \text{cof}(A) = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix} $$
The adjoint of $A$ is the transpose of the cofactor matrix:
$\text{adj}(A) = (\text{cof}(A))^T = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}^T = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $
... (iii)
Shortcut for $2 \times 2$ Adjoint: For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the adjoint can be found quickly by swapping the elements on the main diagonal ($a$ and $d$) and changing the signs of the elements on the off-diagonal ($b$ and $c$).
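This shortcut is a one-liner in code. A minimal sketch in plain Python (the function name `adj_2x2` is illustrative):

```python
def adj_2x2(m):
    """Adjoint of a 2x2 matrix: swap the diagonal entries, negate the off-diagonal ones."""
    a, b = m[0]
    c, d = m[1]
    return [[d, -b],
            [-c, a]]

print(adj_2x2([[2, 3], [4, 5]]))   # [[5, -3], [-4, 2]]
```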
Example of Adjoint of a $2 \times 2$ Matrix
Example 6. Find the adjoint of the matrix $A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$.
Answer:
The given matrix is $A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$. This is a $2 \times 2$ matrix of the form $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ with $a=2, b=3, c=4, d=5$.
Using the shortcut rule for the adjoint of a $2 \times 2$ matrix:
Swap the diagonal elements ($a$ and $d$): Swap 2 and 5 $\implies \begin{bmatrix} 5 & \cdot \\ \cdot & 2 \end{bmatrix}$.
Change the signs of the off-diagonal elements ($b$ and $c$): Change sign of 3 to -3, and change sign of 4 to -4 $\implies \begin{bmatrix} \cdot & -3 \\ -4 & \cdot \end{bmatrix}$.
Combining these gives the adjoint matrix:
$$ \text{adj}(A) = \begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix} $$
Alternatively, we can follow the general steps of finding cofactors and taking the transpose:
- $C_{11} = (-1)^{1+1} M_{11} = (+1)(5) = 5$.
- $C_{12} = (-1)^{1+2} M_{12} = (-1)(4) = -4$.
- $C_{21} = (-1)^{2+1} M_{21} = (-1)(3) = -3$.
- $C_{22} = (-1)^{2+2} M_{22} = (+1)(2) = 2$.
The matrix of cofactors is $\begin{bmatrix} 5 & -4 \\ -3 & 2 \end{bmatrix}$.
The adjoint is the transpose of the cofactor matrix:
$$ \text{adj}(A) = \begin{bmatrix} 5 & -4 \\ -3 & 2 \end{bmatrix}^T = \begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix} $$
Both methods give the same result.
Answer: $\text{adj}(A) = \begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix}$.
Adjoint of a $3 \times 3$ Matrix
For a $3 \times 3$ matrix $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$, finding the adjoint involves calculating all 9 cofactors $C_{ij} = (-1)^{i+j} M_{ij}$. The minors $M_{ij}$ are $2 \times 2$ determinants.
First, find the matrix of cofactors $\text{cof}(A) = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}$.
The adjoint is the transpose of this cofactor matrix:
$\text{adj}(A) = (\text{cof}(A))^T = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix} $
... (iv)
Notice that the element in the first row, second column of $\text{adj}(A)$ is $C_{21}$ (the cofactor of $a_{21}$), not $C_{12}$. This is due to the transpose.
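For a general $n \times n$ matrix, the adjoint is simply the transpose of the cofactor matrix, which can be computed mechanically. A sketch using NumPy for the minors (the function name `adjoint` is illustrative, and the matrix is the one used in Example 7 below):

```python
import numpy as np

def adjoint(a):
    """adj(A): the cofactor matrix of A, transposed (formula (iv))."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_{ij}: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

B = [[1, 2, 3],
     [0, 1, 0],
     [2, 4, 5]]
print(np.round(adjoint(B)))   # [[ 5.  2. -3.] [ 0. -1.  0.] [-2.  0.  1.]], matching Example 7 below
```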
Example of Adjoint of a $3 \times 3$ Matrix
Example 7. Find the adjoint of the matrix $B = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 0 \\ 2 & 4 & 5 \end{bmatrix}$.
Answer:
The given matrix is $B = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 0 \\ 2 & 4 & 5 \end{bmatrix}$. We need to find the cofactor $C_{ij} = (-1)^{i+j} M_{ij}$ for each of the 9 elements.
- $C_{11} = (-1)^{1+1} \begin{vmatrix} 1 & 0 \\ 4 & 5 \end{vmatrix} = (1)(1 \times 5 - 4 \times 0) = 5 - 0 = 5$.
- $C_{12} = (-1)^{1+2} \begin{vmatrix} 0 & 0 \\ 2 & 5 \end{vmatrix} = (-1)(0 \times 5 - 2 \times 0) = (-1)(0) = 0$.
- $C_{13} = (-1)^{1+3} \begin{vmatrix} 0 & 1 \\ 2 & 4 \end{vmatrix} = (1)(0 \times 4 - 2 \times 1) = 0 - 2 = -2$.
- $C_{21} = (-1)^{2+1} \begin{vmatrix} 2 & 3 \\ 4 & 5 \end{vmatrix} = (-1)(2 \times 5 - 4 \times 3) = (-1)(10 - 12) = (-1)(-2) = 2$.
- $C_{22} = (-1)^{2+2} \begin{vmatrix} 1 & 3 \\ 2 & 5 \end{vmatrix} = (1)(1 \times 5 - 2 \times 3) = 5 - 6 = -1$.
- $C_{23} = (-1)^{2+3} \begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix} = (-1)(1 \times 4 - 2 \times 2) = (-1)(4 - 4) = (-1)(0) = 0$.
- $C_{31} = (-1)^{3+1} \begin{vmatrix} 2 & 3 \\ 1 & 0 \end{vmatrix} = (1)(2 \times 0 - 1 \times 3) = 0 - 3 = -3$.
- $C_{32} = (-1)^{3+2} \begin{vmatrix} 1 & 3 \\ 0 & 0 \end{vmatrix} = (-1)(1 \times 0 - 0 \times 3) = (-1)(0) = 0$.
- $C_{33} = (-1)^{3+3} \begin{vmatrix} 1 & 2 \\ 0 & 1 \end{vmatrix} = (1)(1 \times 1 - 0 \times 2) = 1 - 0 = 1$.
The matrix of cofactors $\text{cof}(B)$ is:
$$ \text{cof}(B) = \begin{bmatrix} 5 & 0 & -2 \\ 2 & -1 & 0 \\ -3 & 0 & 1 \end{bmatrix} $$
The adjoint of $B$ is the transpose of the cofactor matrix:
$$ \text{adj}(B) = (\text{cof}(B))^T = \begin{bmatrix} 5 & 0 & -2 \\ 2 & -1 & 0 \\ -3 & 0 & 1 \end{bmatrix}^T = \begin{bmatrix} 5 & 2 & -3 \\ 0 & -1 & 0 \\ -2 & 0 & 1 \end{bmatrix} $$
Thus, the adjoint of matrix $B$ is $\begin{bmatrix} 5 & 2 & -3 \\ 0 & -1 & 0 \\ -2 & 0 & 1 \end{bmatrix}$.
Relationship between a Matrix, its Adjoint, and its Determinant
There exists a fundamental and powerful relationship between a square matrix $A$, its adjoint $\text{adj}(A)$, and its determinant $\det(A)$. This relationship is the key to deriving the formula for the inverse of a matrix.
For any square matrix $A$ of order $n$, the product of $A$ and its adjoint (in either order) is equal to the determinant of $A$ multiplied by the identity matrix of the same order $n$.
$A (\text{adj } A) = (\text{adj } A) A = \det(A) I_n $
... (v)
where $I_n$ is the identity matrix of order $n$.
Explanation/Derivation: Let $A = [a_{ij}]$ and $\text{adj}(A) = [C_{ji}]$ (where $C_{ji}$ is the cofactor of $a_{ji}$ in $A$). Consider the product $A (\text{adj } A)$. The element in the $i$-th row and $k$-th column of the product matrix $A (\text{adj } A)$ is the dot product of the $i$-th row of $A$ and the $k$-th column of $\text{adj}(A)$.
$$ (A (\text{adj } A))_{ik} = \sum_{j=1}^n a_{ij} (\text{adj } A)_{jk} $$
By the definition of the adjoint, $(\text{adj } A)_{jk} = C_{kj}$ (the cofactor of the element $a_{kj}$ in $A$). So,
$$ (A (\text{adj } A))_{ik} = \sum_{j=1}^n a_{ij} C_{kj} $$
Now, consider the value of this sum based on whether $i = k$ or $i \neq k$:
- If $i = k$ (elements on the main diagonal of the product): The sum becomes $\sum_{j=1}^n a_{ij} C_{ij}$. This sum is the cofactor expansion of the determinant of $A$ along the $i$-th row (since we are multiplying elements of the $i$-th row $a_{ij}$ by their corresponding cofactors $C_{ij}$). This sum is always equal to the determinant of $A$, regardless of the row chosen for expansion. So, $(A (\text{adj } A))_{ii} = \det(A)$.
- If $i \neq k$ (elements off the main diagonal of the product): The sum is $\sum_{j=1}^n a_{ij} C_{kj}$. This sum represents the determinant of a matrix that is obtained from $A$ by replacing its $k$-th row with its $i$-th row (the cofactors are still calculated based on the original positions relative to the $k$-th row). In this modified matrix, the $i$-th row and the $k$-th row would be identical. A matrix with two identical rows has a determinant of 0 (from properties of determinants). Thus, $\sum_{j=1}^n a_{ij} C_{kj} = 0$ when $i \neq k$.
Combining these two cases, the matrix $A (\text{adj } A)$ is a matrix where all elements on the main diagonal are equal to $\det(A)$, and all off-diagonal elements are 0. This is precisely the definition of the scalar matrix $\det(A) I_n$, where $I_n$ is the identity matrix of order $n$.
$$ A (\text{adj } A) = \begin{bmatrix} \det(A) & 0 & \dots & 0 \\ 0 & \det(A) & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \det(A) \end{bmatrix} = \det(A) \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{bmatrix} = \det(A) I_n $$
A similar derivation gives $(\text{adj } A) A = \det(A) I_n$: the element $((\text{adj } A) A)_{ki} = \sum_{j=1}^n (\text{adj } A)_{kj} a_{ji} = \sum_{j=1}^n C_{jk} a_{ji}$ multiplies the elements of the $i$-th column of $A$ by the cofactors of its $k$-th column. This sum equals $\det(A)$ when $k = i$ (it is the cofactor expansion along the $i$-th column) and 0 when $k \neq i$ (it is the determinant of a matrix with two identical columns).
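The identity is easy to verify numerically for the matrix of Example 7 (a sketch, assuming NumPy; $\text{adj}(B)$ is taken from that example):

```python
import numpy as np

B = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0],
              [2.0, 4.0, 5.0]])
adj_B = np.array([[ 5.0,  2.0, -3.0],
                  [ 0.0, -1.0,  0.0],
                  [-2.0,  0.0,  1.0]])   # adj(B) from Example 7

# Both products should be det(B) * I_3, i.e. a diagonal matrix with det(B) on the diagonal.
print(B @ adj_B)
print(adj_B @ B)
print(np.linalg.det(B))                  # approximately -1.0
```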
Inverse of a Matrix using Adjoint
The fundamental relationship $A (\text{adj } A) = (\text{adj } A) A = \det(A) I_n$ provides a direct formula for computing the inverse of a square matrix, provided the inverse exists.
From the equation $A (\text{adj } A) = \det(A) I_n$, if the determinant $\det(A)$ is non-zero, we can divide both sides by $\det(A)$ (which is a scalar):
$$ \frac{1}{\det(A)} [A (\text{adj } A)] = \frac{1}{\det(A)} [\det(A) I_n] $$
Using the property of scalar multiplication $(kM)N = k(MN)$:
$$ A \left( \frac{1}{\det(A)} \text{adj } A \right) = \left( \frac{\det(A)}{\det(A)} \right) I_n $$
$$ A \left( \frac{1}{\det(A)} \text{adj } A \right) = 1 \cdot I_n $$
$$ A \left( \frac{1}{\det(A)} \text{adj } A \right) = I_n $$
Similarly, from $(\text{adj } A) A = \det(A) I_n$, we get:
$$ \left( \frac{1}{\det(A)} \text{adj } A \right) A = I_n $$
By the definition of the inverse matrix (a matrix $B$ is the inverse of $A$ if $AB = BA = I_n$), the matrix $\frac{1}{\det(A)} \text{adj } A$ is the inverse of $A$.
If $\det(A) \neq 0$, then $A^{-1} = \frac{1}{\det(A)} \text{adj}(A) $
... (vi)
This formula is valid if and only if the determinant of A is non-zero ($\det(A) \neq 0$). If $\det(A) = 0$, the formula would involve division by zero, which is undefined. In this case, the matrix $A$ is singular and does not have an inverse.
Conclusion: A square matrix $A$ is invertible (non-singular) if and only if its determinant is non-zero ($\det(A) \neq 0$). If $\det(A) = 0$, the matrix is singular and its inverse does not exist.
The adjoint matrix serves as a crucial intermediate step in calculating the inverse of a matrix using this formula, particularly useful for matrices of small orders (like $2 \times 2$ and $3 \times 3$).
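Putting formula (vi) into code gives a small, if numerically naive, inverse routine. A sketch assuming NumPy, with the illustrative names `adjoint` and `inverse_via_adjoint`, cross-checked against `np.linalg.inv`:

```python
import numpy as np

def adjoint(a):
    """Transpose of the cofactor matrix of a square matrix a."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

def inverse_via_adjoint(a):
    """A^{-1} = adj(A) / det(A), valid only when det(A) != 0 (formula (vi))."""
    d = np.linalg.det(np.asarray(a, dtype=float))
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular; no inverse exists")
    return adjoint(a) / d

B = [[1, 2, 3],
     [0, 1, 0],
     [2, 4, 5]]
print(np.allclose(inverse_via_adjoint(B), np.linalg.inv(B)))   # True
```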